%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2011/07.11.01.52
%2 sid.inpe.br/sibgrapi/2011/07.11.01.52.29
%@doi 10.1109/SIBGRAPI.2011.41
%T Transfer Learning for Human Action Recognition
%D 2011
%A Lopes, Ana Paula B.,
%A Santos, Elerson R. da S.,
%A Valle, Eduardo A.,
%A Almeida, Jussara M. de,
%A Araújo, Arnaldo de Albuquerque,
%@affiliation Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil and Depart. of Exact and Tech. Sciences - Universidade Estadual de Santa Cruz (UESC), Ilhéus, Brazil
%@affiliation Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
%@affiliation Universidade Estadual de Campinas (UNICAMP), Campinas (SP), Brazil
%@affiliation Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
%@affiliation Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
%E Lewiner, Thomas,
%E Torres, Ricardo,
%B Conference on Graphics, Patterns and Images, 24 (SIBGRAPI)
%C Maceió, AL, Brazil
%8 28-31 Aug. 2011
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K action recognition, transfer learning, bags-of-visual-features, video understanding
%X Manually collecting action samples from realistic videos is a time-consuming and error-prone task. This is a serious bottleneck for research on video understanding, since the large intra-class variations of such videos demand training sets large enough to properly encompass those variations. Most authors dealing with this issue rely on (semi-)automated procedures to collect additional, generally noisy, examples. In this paper, we exploit a different approach, based on a Transfer Learning (TL) technique, to address the target task of action recognition. More specifically, we propose a framework that transfers knowledge about concepts from a previously labeled still-image database to the target action video database. It is assumed that, once identified in the target action database, these concepts provide contextual clues to the action classifier. Our experiments with the Caltech256 and Hollywood2 databases indicate (a) the feasibility of successfully using transfer learning techniques to detect concepts and (b) that it is indeed possible to enhance action recognition with the transferred knowledge of even a few concepts. In our case, only four concepts were enough to obtain statistically significant improvements for most actions.
%@language en
%3 PID1979911.pdf

